Search Results: "zw"

25 September 2016

Vincent Sanders: I'll huff, and I'll puff, and I'll blow your house in

Sometimes it really helps to have a different view on a problem, and after my recent writings on my Public Suffix List (PSL) library I was fortunate to receive a suggestion from my friend Enrico Zini.

I had asked for suggestions on reducing the size of the library further and Enrico simply suggested Huffman coding. This was a technique I had learned about long ago in connection with data compression, and the intervening years had made the details fuzzy, which explains why it had not immediately sprung to mind.

[Image: A small subset of the Public Suffix List as stored within libnspsl]

Huffman coding, named for David A. Huffman, is an algorithm that enables a very efficient representation of data. In a normal array of characters every character takes the same eight bits to represent, which is the best we can do when each of the 256 possible values is equally likely. If the data is not evenly distributed this is no longer the case; for example, in English text the value for e is roughly fifteen times more likely than that for k.

[Image: Every step of the Huffman encoding tree build for the example string table]

So if we have data with a non-uniform distribution of probabilities we need a way for the data to be encoded with fewer bits for the common values and more bits for the rarer ones. To be efficient we also need variable length representations without storing each length separately. The term for this kind of data representation is a prefix code and there are several ways to generate them.

Such is the influence of Huffman on the area of prefix codes that they are often called Huffman codes even when they were not created using his algorithm. One can dream of becoming immortalised like this; to join the ranks of those whose names are given to units or whole ideas in a field must be immensely rewarding. However, given that Huffman invented his algorithm, and proved it optimal, to answer a question on a term paper in his early twenties, I fear I may already be a bit too late.

The algorithm itself is relatively straightforward. First a frequency analysis is performed, a fancy way of saying count how many of each character is in the input data. Next a binary tree is created by using a priority queue initialised with the nodes sorted by frequency.

[Image: The resulting Huffman tree and the binary representation of the input symbols]
The counts of the two least frequent items are summed and a node is placed in the tree with the two original entries as its child nodes. This step is repeated until a single node remains whose count equals the length of the input.

To encode data one simply walks the tree, outputting a 0 for a left branch and a 1 for a right branch until the original value is reached. This generates a mapping from values to bit sequences, and the input is then converted value by value into the bit output. To decode, the encoded data is read bit by bit to walk the tree back down to the values.
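The whole procedure is compact enough to sketch in full. Below is a minimal illustrative C implementation of the algorithm as just described, using a simple linear scan in place of a real priority queue; it is a sketch, not the libnspsl code.

#include <stdio.h>

#define MAXSYM 256

struct hnode {
    unsigned count;  /* frequency; sum of children for internal nodes */
    int symbol;      /* character value, or -1 for internal nodes */
    int left, right; /* child node indices, or -1 for leaves */
};

static struct hnode nodes[2 * MAXSYM];
static int live[MAXSYM]; /* nodes not yet joined under a parent */
static int nnodes, nlive;

/* remove and return the live node index with the smallest count */
static int pop_min(void)
{
    int best = 0;
    for (int i = 1; i < nlive; i++)
        if (nodes[live[i]].count < nodes[live[best]].count)
            best = i;
    int idx = live[best];
    live[best] = live[--nlive];
    return idx;
}

static int build_tree(const char *input)
{
    unsigned freq[MAXSYM] = { 0 };

    /* frequency analysis: count occurrences of each character */
    for (const unsigned char *p = (const unsigned char *)input; *p; p++)
        freq[*p]++;

    /* one leaf node for each character which occurs */
    for (int c = 0; c < MAXSYM; c++)
        if (freq[c] != 0) {
            nodes[nnodes] = (struct hnode){ freq[c], c, -1, -1 };
            live[nlive++] = nnodes++;
        }

    /* repeatedly join the two least frequent nodes under a parent */
    while (nlive > 1) {
        int a = pop_min(), b = pop_min();
        nodes[nnodes] = (struct hnode){
            nodes[a].count + nodes[b].count, -1, a, b
        };
        live[nlive++] = nnodes++;
    }
    return live[0]; /* root; its count equals the input length */
}

/* walk the tree printing each symbol's bits: 0 for left, 1 for right */
static void print_codes(int node, char *bits, int depth)
{
    if (nodes[node].symbol >= 0) {
        bits[depth] = '\0';
        printf("'%c' -> %s\n", nodes[node].symbol, bits);
        return;
    }
    bits[depth] = '0';
    print_codes(nodes[node].left, bits, depth + 1);
    bits[depth] = '1';
    print_codes(nodes[node].right, bits, depth + 1);
}

int main(void)
{
    char bits[MAXSYM];
    int root = build_tree("*!asiabvcomcoopitamazonaws"
                          "arsaves-the-whalescomputebasilicata");
    print_codes(root, bits, 0);
    return 0;
}

Running this prints one prefix code per distinct character, with frequent characters such as a receiving the shortest bit sequences.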

If we perform this algorithm on the example string table *!asiabvcomcoopitamazonawsarsaves-the-whalescomputebasilicata we can reduce the 488 bits (61 * 8 bit characters) to 282 bits, a reduction of around 42%. Obviously in a real application the Huffman tree would also need to be stored, which would probably exceed this saving, but for larger data sets it is probable this technique would yield excellent results on this kind of data.

Once I had proved this to myself I implemented the encoder within the existing conversion program. Although my Perl encoder is not very efficient it can process the entire PSL string table (around six thousand labels totalling 40KB or so) in less than a second, so unless the table grows massively an inelegant approach will suffice.

The resulting bits were packed into 32 bit values to improve decode performance (most systems prefer to deal with larger memory fetches less frequently), giving 18KB of output, 47% of the original size. This is a great improvement in size and means the statically linked test program is now 59KB, actually smaller than the gzipped source data.

$ ls -alh test_nspsl
-rwxr-xr-x 1 vince vince 59K Sep 25 23:58 test_nspsl
$ ls -al public_suffix_list.dat.gz
-rw-r--r-- 1 vince vince 62K Sep 1 08:52 public_suffix_list.dat.gz

To be clear: the statically linked program can determine whether a domain is in the PSL with no additional heap allocations, and includes the entire PSL ordered tree, the domain label string table and the Huffman decode table to read it.

An unexpected side effect is that because the decode loop is small it sits in the processor cache. This appears to bring the performance of the string comparison function huffcasecmp() (which is not locale dependent because we know the data is limited to ASCII) close to that of strcasecmp(); indeed on ARM32 systems there is a very modest improvement in performance.
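To show why the loop is so small, here is a sketch of a bit stream reader over the packed 32 bit words and the decode step it enables. The real libnspsl decoder will differ in detail, and the bit ordering here (most significant bit first) is an assumption.

#include <stdint.h>

struct bitstream {
    const uint32_t *words; /* Huffman coded data packed into words */
    unsigned bit;          /* absolute bit position in the stream */
};

/* one 32 bit memory fetch serves thirty-two of these calls */
static unsigned next_bit(struct bitstream *bs)
{
    uint32_t word = bs->words[bs->bit >> 5];          /* bit / 32 */
    unsigned b = (word >> (31 - (bs->bit & 31))) & 1; /* MSB first */
    bs->bit++;
    return b;
}

/* decoding one character is then a tiny loop over the Huffman tree
 * from the earlier sketch, easily kept resident in cache:
 *
 *     int n = root;
 *     while (nodes[n].symbol < 0)
 *         n = next_bit(bs) ? nodes[n].right : nodes[n].left;
 */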

I think this is as much work as I am willing to put into this library, but I am pleased to have achieved a result on par with the best of breed (libpsl still has a data representation 20KB smaller than libnspsl but requires additional libraries for additional functionality) and I got to (re)learn an important algorithm too.

20 September 2016

Vincent Sanders: If I see an ending, I can work backward.

Now while I am sure Arthur Miller was referring to writing a play when he said those words, they have an oddly appropriate resonance for my topic.

In the early nineties Lou Montulli applied the idea of magic cookies to HTTP to make the web stateful; I imagine he had no idea of the issues he was going to introduce for the future. Like most web technology it was a solution to an immediate problem which it has never subsequently been possible to improve.

[Image: Chocolate chip cookies are much tastier than HTTP cookies]

The HTTP cookie is simply a way for a website to identify a connecting browser session so that state can be kept between retrieved pages. Due to shortcomings in the design of cookies and implementation details in browsers this has led to a selection of unwanted side effects. The specific issue I am talking about here is the supercookie, where the super prefix has similar connotations in this context as when applied to the word villain.

Whenever the browser requests a resource (web page, image, etc.) the server may return a cookie along with the resource, which your browser remembers. The cookie has a domain name associated with it, and when your browser requests additional resources, if the cookie domain matches the requested resource's domain name, the cookie is sent along with the request.

As an example, the first time you visit a page on www.example.foo.invalid you might receive a cookie with the domain example.foo.invalid, so the next time you visit a page on www.example.foo.invalid your browser will send the cookie along. Indeed it will also send it for any page on another.example.foo.invalid.

A supercookie is simply one where, instead of being limited to one sub-domain (example.foo.invalid), the cookie is set for a top level domain (foo.invalid), so visiting any such domain (I used the invalid name in my examples but one could substitute com or co.uk) causes your web browser to give out the cookie. Hackers would love to be able to set up such cookies and potentially control and hijack many sites at a time.

This problem was noted early on and browsers were not allowed to set cookie domains with fewer than two parts, so example.invalid or example.com were allowed but invalid or com on their own were not. This works fine for top level domains like .com, .org and .mil but not for countries where the domain registrar has rules about second levels, like the uk domain (uk domains must have a second level such as .co.uk).
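That early heuristic amounts to counting the dot separated parts of the proposed cookie domain. A minimal sketch (illustrative only, not any browser's actual code) shows both the rule and its failure mode:

#include <stdbool.h>

static bool naive_cookie_domain_ok(const char *domain)
{
    int parts = 1;
    for (const char *p = domain; *p != '\0'; p++)
        if (*p == '.')
            parts++;
    return parts >= 2; /* reject single part domains like "com" */
}

/* naive_cookie_domain_ok("example.com") -> true  (correct)
 * naive_cookie_domain_ok("com")         -> false (correct)
 * naive_cookie_domain_ok("co.uk")       -> true  (wrong: co.uk is
 *     a public suffix, so a supercookie would be permitted) */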

[Image: NetSurf cookie manager showing a supercookie]

There is no way to generate the correct set of top level domains with an algorithm, so a database is required; it is called the Public Suffix List (PSL). This database is a simple text formatted list with wildcard and inversion syntax and is, at the time of writing, around 180Kb of text including comments, which compresses down to 60Kb or so with deflate.
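For reference, entries in the list look like this tiny excerpt (the syntax shown is the real PSL syntax, though the list itself is of course far larger):

// comments are introduced with two slashes
uk
co.uk
*.ck     // wildcard: every label directly under ck is a public suffix
!www.ck  // inversion: www.ck itself is not a public suffix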

A few years ago, with ICANN allowing the great expansion of top level domains, the existing NetSurf supercookie handling was found to be wanting and I decided to implement a solution using the PSL. At that point in time the database was only 100Kb source or 40Kb compressed.

I started by looking at the few existing libraries. In fact only the regdom library was adequate, but it used 150Kb of heap to load the pre-processed list. This would have had the drawback of increasing NetSurf heap usage significantly (we still have users on 8Mb systems). Because of this, and the need to run a PHP script to generate the pre-processed input, it was decided the library was not suitable.

Lacking other choices I came up with my own implementation, which used a perl script to construct a tree of domains from the PSL in a static array with the label strings in a separate table. At the time my implementation added 70Kb of read only data, which I thought reasonable and which allowed direct lookup of answers from the database.

This solution still required a pre-processing step to generate the C source code, but perl is much more readily available, is a language already used by our tooling, and we could always simply ship the generated file. As long as the generated file was updated at release time, as we already do for our fallback SSL certificate root set, this would be acceptable.

[Image: Wireshark session showing NetSurf sending a co.uk supercookie to bbc.co.uk]
I put the solution into NetSurf, was pleased no-one seemed to notice, and moved on to other issues. Recently, while fixing a completely unrelated issue in the display of session cookies in the management interface, I realised I had some test supercookies present in the display. After the initial "that's odd" I realised with horror that there might be a deeper issue.

It quickly became evident that the PSL generation was broken and had been for a long time; even worse, somewhere along the line the "redundant" empty generated source file had been removed and the ancient fallback code path was all that had been used.

This issue had escalated somewhat from a trivial display problem. I took a moment to assess the situation more broadly and came to the conclusion that there were a number of interconnected causes, centred around the lack of automated testing, which could be solved by extracting the PSL handling into a "support" library.

NetSurf has several of these support libraries, which can be used separately from the main browser project but are principally oriented towards it. These libraries are shipped and built in releases alongside the main browser codebase and mainly serve to make APIs more obvious and modular. In this case my main aim was to have the functionality segregated into a separate module which could be tested, updated and monitored directly by our CI system, meaning the embarrassing failure I had found can never occur again.

Before creating my own library I did consider libpsl, a library which had been created since I wrote my original implementation. Initially I was very interested in using it, given it managed a data representation within a mere 32Kb.

Unfortunately that library integrates a great deal of IDN and punycode handling which is not required in this use case. NetSurf already has to handle IDN and punycode translations and uses punycode encoded domain names internally, only translating to unicode representations for display, so duplicating this functionality via other libraries would require a great deal of resource beyond the raw data representation.

I put the library together based on the existing code generator Perl program and integrated the test set that comes along with the PSL. I was a little alarmed to discover that the PSL had almost doubled in size since the implementation was originally written and the trivial test program of the library now weighed in at a hefty 120Kb.

This stemmed from two main causes:
  1. there were now many more domain label strings to be stored
  2. there were now many, many more nodes in the tree.
To address the first cause, the length of each domain label string was moved into the unused padding space within its tree node, removing a byte from each domain label and saving 6Kb. Next it occurred to me, while building the domain label string table, that if a label to be added already existed as a substring within the table it could be elided.

The domain labels were sorted from longest to shortest and added in order, searching for substring matches as the table was built; this saved another 6Kb. I am sure there are ways to reduce this further that I have missed (if you see them let me know!) but a 25% saving (47Kb to 35Kb) was a good start.
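The construction is simple enough to sketch. Here is an illustrative C version of the idea (the real generator is the Perl program mentioned earlier), using a few labels borrowed from the example data:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int by_length_desc(const void *a, const void *b)
{
    return (int)strlen(*(const char *const *)b) -
           (int)strlen(*(const char *const *)a);
}

/* return the offset of label within table, appending it if needed */
static size_t add_label(char *table, size_t *used, const char *label)
{
    const char *found = strstr(table, label);
    if (found != NULL)
        return (size_t)(found - table); /* elide: reuse substring */
    size_t off = *used;
    memcpy(table + off, label, strlen(label));
    *used += strlen(label);
    table[*used] = '\0';
    return off;
}

int main(void)
{
    const char *labels[] = { "com", "aws", "amazonaws", "compute" };
    char table[64] = "";
    size_t used = 0;

    /* longest first, so shorter labels can reuse earlier entries */
    qsort(labels, 4, sizeof(labels[0]), by_length_desc);
    for (int i = 0; i < 4; i++)
        printf("%-9s at offset %zu\n", labels[i],
               add_label(table, &used, labels[i]));
    printf("table: \"%s\" (%zu bytes)\n", table, used);
    return 0;
}

With this input the table ends up as "amazonawscompute"; aws and com cost nothing because they are already present as substrings of the longer labels.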

The second cause was a little harder to address. The structure representing nodes in the tree that I started with looked reasonable at first glance.

struct pnode {
    uint16_t label_index; /* index into string table of label */
    uint16_t label_length; /* length of label */
    uint16_t child_node_index; /* index of first child node */
    uint16_t child_node_count; /* number of child nodes */
};

I examined the generated table and observed that the majority of nodes were leaf nodes (they had no children), which makes sense given the type of data being represented. Allowing two types of node, one for labels and a second for the child node information, would halve the node size in most cases while requiring only a modest change to the tree traversal code.

The only issue with this was finding a way to indicate that a node has child information. It was realised that domain labels have a maximum length of 63 characters, meaning their length can be represented in six bits, so a uint16_t was excessive. That space was split into two uint8_t parts, one for the length and one for a flag indicating that a child data node follows.

union pnode {
    struct {
        uint16_t index; /* index into string table of label */
        uint8_t length; /* length of label */
        uint8_t has_children; /* the next table entry is a child node */
    } label;
    struct {
        uint16_t node_index; /* index of first child node */
        uint16_t node_count; /* number of child nodes */
    } child;
};

static const union pnode pnodes[8580] = {
    /* root entry */
    { .label = { 0, 0, 1 } }, { .child = { 2, 1553 } },
    /* entries 2 to 1794 */
    { .label = { 37, 2, 1 } }, { .child = { 1795, 6 } },

    ...

    /* entries 8577 to 8578 */
    { .label = { 31820, 6, 1 } }, { .child = { 8579, 1 } },
    /* entry 8579 */
    { .label = { 0, 1, 0 } },
};

This change reduced the node array size from 63Kb to 33Kb, almost a 50% saving. I considered using bitfields to try to pack the label length and has_children flag into a single byte, but such packing will not reduce the size of a node below 32 bits because it is unioned with the child structure.

The possibility of using the spare uint8_t gained by bitfield packing to store an additional label node within three other nodes was considered, but it added a great deal of complexity to node lookup and table construction for a saving of around 4Kb, so it was not incorporated.
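For illustration, a lookup might walk the children of a node in this two-type table as follows. This is a sketch which assumes the union pnode definition above and a hypothetical match_label() helper comparing a candidate against the string table; it is not the actual libnspsl source.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* assumed helper: compare len characters of the string table
 * starting at index against the sought label */
bool match_label(uint16_t index, const char *label, size_t len);

/* search the children of the label node at idx for a label,
 * returning the matching node index or 0 if there is none */
static unsigned lookup_child(unsigned idx, const char *label, size_t len)
{
    const union pnode *n = &pnodes[idx];

    if (!n->label.has_children)
        return 0; /* leaf node: nothing below it */

    /* the child information immediately follows the label entry */
    unsigned ci = (n + 1)->child.node_index;
    unsigned count = (n + 1)->child.node_count;

    for (unsigned i = 0; i < count; i++) {
        const union pnode *c = &pnodes[ci];
        if (c->label.length == len &&
            match_label(c->label.index, label, len))
            return ci;
        /* siblings are contiguous; skip a child entry if present */
        ci += c->label.has_children ? 2 : 1;
    }
    return 0;
}

Starting from the root, lookup_child(0, "uk", 2) would scan the 1553 top level children beginning at entry 2; repeating the call with each label of a domain in turn walks down the tree.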

With the changes incorporated the test program was a much more acceptable 75Kb, reasonably close to the size of the compressed source but with the benefit of direct lookup. Integrating the library's single API call into NetSurf was straightforward and resulted in correct operation when tested.

This episode reminded me of the dangers of code that can fail silently. It exposed our users to a security problem we thought had been addressed almost six years ago and squandered the limited resources of the project. Hopefully it is a lesson we will not have to learn again any time soon. If there is a positive to take away, it is that the new implementation is more space efficient, automatically built and, importantly, tested.

3 September 2016

Thorsten Alteholz: Openzwave in Debian

It was a real surprise when I saw activity on #791965, which is my ITP bug for packaging openzwave. As Ralph wrote, the legal status of the Z-Wave standard has changed. According to a press release by Sigma Designs, the Z-Wave standard has now been put into the public domain. As even the specification of the Z-Wave S2 security application framework is available now, the openzwave community is finally able to create a really compatible application which might also pass Z-Wave certification. Thus there is new hope that there will be an openzwave package in Debian.

30 August 2016

Mike Gabriel: credential-sheets: User Account Credential Sheets Tool

Preface

This little piece of work has been pending on my todo list for about two years now. For our local school project "IT-Zukunft Schule" I wrote the little tool credential-sheets. It is a little Perl script that turns a series of import files (CSV format), as they have to be provided for user mass import into GOsa (i.e. LDAP), into a series of A4 sheets with little cards on them containing initial user credential information. The upstream sources are on Github and I have just uploaded this little tool to Debian.

Introduction

After a mass import of user accounts (e.g. into LDAP) most site administrators have to create information sheets (or snippets) containing those new credentials (like username, password, policy of usage, etc.). With this tiny tool, providing these pieces of information to multiple users becomes really simple. Account data is taken from a CSV file and the sheets are output as PDF using easily configurable LaTeX template files.

Usage

Synopsis: credential-sheets [options] <CSV-file-1> [<CSV-file-2> [...]]

Options

The credential-sheets command accepts the following command-line options:
   --help Display a help with all available command line options and exit.
   --template=<tpl-name>
          Name of the template to use.
   --cols=<x>
          Render <x> columns per sheet.
   --rows=<y>
          Render <y> rows per sheet.
   --zip  Do create a ZIP file at the end.
   --zipfilename=<zip-file-name>
          Alternative ZIP file name (default: name of parent folder).
   --debug
          Don't remove temporary files.
CSV File Column Arrangement

The credential-sheets tool can handle any sort of column arrangement in the given CSV file(s). It expects the CSV file(s) to have column names in their first line. The given column names have to map to the VAR-<column-name> placeholders in credential-sheets's LaTeX templates. The shipped-with templates (students, teachers) can handle these column names: If you create your own templates, you can be very flexible in using your own column names and template names. Only make sure that the column names provided in the CSV file(s)'s first line match the variables used in the customized LaTeX template.

Customizations

The shipped-with credential sheets templates are expected to be installed in /usr/share/credential-sheets/ for system-wide installations. When customizing templates, simply place a modified copy of any of those files into ~/.credential-sheets/ or /etc/credential-sheets/. For further details, see below. The credential-sheets tool uses these configuration files: Search paths for configuration files (in listed order): You can easily customize the resulting PDF files generated with this tool by placing your own template files, header and footer where appropriate.

Dependencies

This project requires the following dependencies:

Copyright and License

Copyright 2012-2016, Mike Gabriel <mike.gabriel@das-netzwerkteam.de>. Licensed under GPL-2+ (see COPYING file).
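As a quick hypothetical example (the column names and the template name below are invented for illustration, not taken from the shipped templates), consider a students.csv whose first line names the columns:

   login,password,form
   jdoe,Kah3ooze,7a
   msmith,Eif4iech,7b

A custom template referencing VAR-login, VAR-password and VAR-form could then be rendered at three columns by eight rows of cards per A4 sheet with:

   $ credential-sheets --template=mytemplate --cols=3 --rows=8 students.csv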

26 July 2016

Iustin Pop: More virtual cycling

Last weekend I had to stay at home, so I did some more virtual training (slowly, in order to not overwork myself again). This time, after all the Zwift, I wanted to test something else: Tacx Trainer Software. Still virtual, but of a different kind. The difference from Zwift, which does video-game-like worlds, is that TTS, in the configuration I used, plays a real-life video which scrolls faster or slower based on your speed. This speed adjustment is so-so, but the appeal was that I could ride roads that I actually know and have driven before. Modern technology++!

And this was the interesting part: I chose for the first ride the road up to Cap de Formentor, which is one of my favourite places in Mallorca. The road itself is also nice, through some very pleasant woods and with some very good viewpoints, ending at the lighthouse, from where you have wonderful views of the sea. Now, I've driven two times on this road, so I kind of remembered it, but driving a road and cycling the same road, especially when it goes up and down and up, are very different things. I remembered well the first uphill (after the flat area around Port de Pollença), but after that my recollection of how much uphill the road goes was slightly off, and I actually didn't remember that there's that much downhill, which was a very pleasant surprise. I did remember the viewpoints (since I took quite a few pictures along the road), but otherwise I was completely off about the height profile of the road. Interesting how the brain works...

Overall, this is considered a "short" ride in Tacx's film library; it was 21Km, 835m uphill, and I did it in 1h11m, which for me, after two weeks of no sports, was good enough. Also, Tacx has bike selection, and I did this on a simulated mountain bike, with the result that downhill speeds were quite slow (max. 57Km/h, at a -12% grade), so I'm not complaining at all. Next I'll have to see how the road to Sa Calobra is in the virtual world. And next time I go to Mallorca (when/if), I'll have to actually ride these in the real world.

In the meantime, some pictures from an actual trip there. I definitely recommend visiting, preferably early in the morning (it's very crowded later):

[Images: Infinite blue; Sea, boats and mountains; Mountains, vegetation and a bit of sea; View towards El Colomer]

A few more pictures and larger sizes here.

20 July 2016

Daniel Pocock: How many mobile phone accounts will be hijacked this summer?

Summer vacations have been getting tougher in recent years. Airlines cut into your precious vacation time with their online check-in procedures and a dozen reminder messages, there is growing concern about airport security and Brexit has already put one large travel firm into liquidation leaving holidaymakers in limbo. If that wasn't all bad enough, now there is a new threat: while you are relaxing in the sun, scammers fool your phone company into issuing a replacement SIM card or transferring your mobile number to a new provider and then proceed to use it to take over all your email, social media, Paypal and bank accounts. The same scam has been appearing around the globe, from Britain to Australia and everywhere in between. Many of these scams were predicted in my earlier blog SMS logins: an illusion of security (April 2014) but they are only starting to get publicity now as more aspects of our lives are at risk, scammers are ramping up their exploits and phone companies are floundering under the onslaught.

With the vast majority of Internet users struggling to keep their passwords out of the wrong hands, many organizations have started offering their customers the option of receiving two-factor authentication codes on their mobile phone during login. Rather than making people safer, this has simply given scammers an incentive to seize control of telephones, usually by tricking the phone company to issue a replacement SIM or port the number. It also provides a fresh incentive for criminals to steal phones while cybercriminals have been embedding code into many "free" apps to surreptitiously re-route the text messages and gather other data they need for an identity theft sting.

Sadly, telephone networks were never designed for secure transactions. Telecoms experts have made this clear numerous times. Some of the largest scams in the history of financial services exploited phone verification protocols as the weakest link in the chain, including a $150 million heist reminiscent of Ocean's 11.

For phone companies, SMS messaging came as a side-effect of digital communications for mobile handsets. It is less than one percent of their business. SMS authentication is less than one percent of that. Phone companies lose little or nothing when SMS messages are hijacked so there is little incentive for them to secure it. Nonetheless, like insects riding on an elephant, numerous companies have popped up with a business model that involves linking websites to the wholesale telephone network and dressing it up as a "security" solution. These companies are able to make eye-watering profits by "purchasing" text messages for $0.01 and selling them for $0.02 (one hundred percent gross profit), but they also have nothing to lose when SIM cards are hijacked and therefore minimal incentive to take any responsibility.

Companies like Google, Facebook and Twitter have thrown more fuel on the fire by encouraging and sometimes even demanding users provide mobile phone numbers to "prove they are human" or "protect" their accounts. Through these antics, these high profile companies have given a vast percentage of the population a false sense of confidence in codes delivered by mobile phone, yet the real motivation for these companies does not appear to be security at all: they have worked out that the mobile phone number is the holy grail in cross-referencing vast databases of users and customers from different sources for all sorts of creepy purposes.
As most of their services don't involve any financial activity, they have little to lose if accounts are compromised and everything to gain by accurately gathering mobile phone numbers from as many users as possible.
Can you escape your mobile phone while on vacation?

Just how hard is it to get a replacement SIM card or transfer/port a user's phone number while they are on vacation? Many phone companies will accept instructions through a web form or a phone call. Scammers need little more than a user's full name, home address and date of birth: vast lists of these private details are circulating on the black market, sourced from social media, data breaches (99% of which are never detected or made public), marketing companies and even the web sites that encourage your friends to send you free online birthday cards.

Every time a company has asked me to use mobile phone authentication so far, I've opted out and I'll continue to do so. Even if somebody does hijack my phone account while I'm on vacation, the consequences for me are minimal as it will not give them access to any other account or service. Can you and your family members say the same thing?

What can be done?
  • Opt-out of mobile phone authentication schemes.
  • Never give the mobile phone number to web sites unless there is a real and pressing need for them to call you.
  • Tell firms you don't have a mobile phone or that you share your phone with your family and can't use it for private authentication.
  • If you need to use two-factor authentication, only use technical solutions such as smart cards or security tokens that have been engineered exclusively for computer security. Leave them in a locked drawer or safe while on vacation. Be wary of anybody who insists on SMS and doesn't offer these other options.
  • Rather than seeking to "protect" accounts, simply close some or all social media accounts to reduce your exposure and eliminate the effort of keeping them "secure" and updating "privacy" settings.
  • If your bank provides a relationship manager or other personal contact, this can also provide a higher level of security as they get to know you.
Previous blogs on SMS messaging, security and two factor authentication, including my earlier blog SMS Logins: an illusion of security.

12 July 2016

Norbert Preining: Michael Köhlmeier: Zwei Herren am Strand

This recent book by the Austrian author Michael Köhlmeier, Zwei Herren am Strand (Hanser Verlag), spins a story about an imaginative friendship between Charlie Chaplin and Winston Churchill. While there might not be two more different people than these, in the book they are connected by a common fight: the fight against their own depression, explicitly, as well as implicitly by fighting Nazi Germany.

[Image: Zwei Herren am Strand: Roman - Michael Köhlmeier]

Michael Köhlmeier's recently released book Zwei Herren am Strand tells the fictive story of Charlie Chaplin and Winston Churchill meeting and becoming friends, helping each other fight depression and suicidal thoughts. Based on a bunch of (fictive) letters of a (fictive) private secretary of Churchill, as well as a (fictive) book on Chaplin, the first person narrator dives into the interesting time from the mid-twenties to about the Second World War.

[Image: churchill-chaplin]

Chaplin is having a hard time after the divorce from his wife Rita, paired with the difficulties in the production of The Circus, and is contemplating suicide. He conveys this fact to Churchill during a walk on the beach. Churchill is reminded of his own depressions, which he has suffered from an early age. The two of them agree to make a pact to fight the "Black Dog" inside. Later Churchill asks Chaplin about his method of overcoming the phases of depression, and Chaplin explains to him the "Method of the Clown": put a huge page of paper on the floor, lie face down onto the paper and start writing a letter to yourself while rotating clockwise, creating an inward spiral. According to Chaplin, he took this method from Buster Keaton and Harold Lloyd (hard to verify), and it works by making oneself ridiculous, so that one part of oneself can laugh about the other part. The story continues into the early stages of the world war, with both sides fighting Hitler, one politically, one by comedy. The story finishes somewhere in the middle, when the two meet while Chaplin is in a deep depression during the cutting of his movie
The Great Dictator, and together they manage once more to overcome the "black dog".
The book is pure fiction, and Köhlmeier dives into debaucherous story telling, jumping back and forth between several strands of narration. An entertaining and very enjoyable book if you are the type of reader who enjoys story telling. For me this book is in the best tradition of Michael Köhlmeier, whom I consider an excellent story teller. I loved his (unfinished trilogy of) books on Greek mythology (Telemach and Calypso), but found that after these books he got lost too much in radio programs of story telling. While good in itself, I preferred his novels. Thus, I have to admit that I had forgotten about Köhlmeier for some years, until recently I found this little book, which reminded me of him and his excellent stories. A book that is, if you are versed in German, well worth enjoying, especially if one likes funny and slightly queer stories.

6 July 2016

Mike Gabriel: [Arctica Project] Release of nx-libs (version 3.5.99.0)

Introduction

NX is a software suite which implements very efficient compression of the X11 protocol. This increases performance when using X applications over a network, especially a slow one. NX (v3) was originally developed by NoMachine and has been Free Software ever since. Since NoMachine obsoleted NX (v3) some time back in 2013/2014, the maintenance has been continued by a versatile group of developers. The work on NX (v3) is being continued under the project name "nx-libs".

History

Until January 2015, nx-libs was mainly a redistribution approach for the original NX (v3) software. We (we as in mainly a group of X2Go developers) kept changes applied to the original NoMachine sources as minimal as possible. We also kept the original files and folders structure. Patches had been maintained via the quilt utility on top of a Git repository, and the patches had always been kept separate. That was the 3.5.0.x series of nx-libs. In January 2015, the characteristics of nx-libs as maintained by the X2Go project between 2011 and 2015 changed. A decision was reached: This effort is now about to be released as "nx-libs 3.6.0.0".

Contributors

Since 2011, the nx-libs code base has to a great extent been maintained in the context of the X2Go Project [1].

Qindel Group joining in...

In 2014, developers from the Qindel Group [2] joined the nx-libs maintenance. They found X2Go's work on nx-libs and suggested joining forces as best as possible on hacking nx-libs.

The Arctica Project coming up...

Starting in January 2015, the development on the 3.6.x branch of the project was moved into a new project called the Arctica Project [3].

Development Funding by Qindel

In September 2015, a funding project was set up at Qindel. Since then, the Qindel group has greatly supported the development of nx-libs 3.6.x monetarily and with provided man power. The funding project officially is a cooperation between Qindel and DAS-NETZWERKTEAM (the business run by Mike Gabriel, long term maintainer of nx-libs). The funding is split into two subprojects and lasts until August 2017: The current nx-libs development effort is coordinated in the context of the Arctica Project (by Mike Gabriel), with use cases in Arctica, X2Go and TheQVD (the VDI product worked on at Qindel) in mind.

People involved

There are various people involved in nx-libs 3.6.x maintenance and development, some of whom shall be explicitly named here (in alphabetical order of surnames): Some other people have contributed, but have left the project already. Thanks for your work on nx-libs. A big thanks to everyone involved!!! Special thanks go to Stefan Baur for employing Mihai Moldovan and handling all the bureaucracy, so that Mihai can work on this project and get funded for his work.

Achievements of nx-libs 3.5.99.0

We are very close to our self-defined release goal 3.6.0. The below tasks have already been (+/-) completed for version 3.5.99.0: In a previous blog post [4], the code reduction in nx-libs has already been discussed. With this announcement, we want to give a status update about our effort of shrinking the nx-libs code base:
    [mike@minobo nx-libs (3.6.x)]$ cloc --match-f '.*\.(c|cpp|h)' .
        1896 text files.
        1896 unique files.                                          
        7430 files ignored.
    http://cloc.sourceforge.net v 1.60  T=5.88 s (307.3 files/s, 143310.5 lines/s)
    -------------------------------------------------------------------------------
    Language                     files          blank        comment           code
    -------------------------------------------------------------------------------
    C                              958          68574          74891         419638
    C/C++ Header                   730          25683          33957         130418
    C++                            120          17007          11721          61292
    -------------------------------------------------------------------------------
    SUM:                          1808         111264         120569         611348
    -------------------------------------------------------------------------------
The previous statistics had these sums in the last line. First the nx-libs 3.5.0.x code tree (where we came from):
    -------------------------------------------------------------------------------
    SUM:                          5614         329279         382337        1757743
    -------------------------------------------------------------------------------
Then the nx-libs 3.6.x status as it was on May 9th 2016:
    -------------------------------------------------------------------------------
    SUM:                          1922         118581         126783         662635
    -------------------------------------------------------------------------------
Another 50,000 lines of code have been removed over the past two months.

Work pending for the final 3.6.0 release goal

Known Issues

There are several open issues in the nx-libs Github project space, see https://github.com/ArcticaProject/nx-libs/issues.

Testing nx-libs 3.5.99.0

We are currently working on provisioning release builds and nightly builds of nx-libs for various recent Linux distributions. Please stay tuned and watch Mike Gabriel's blog [5]. We already have nightly builds of nx-libs for Debian and Ubuntu [6], but there are more to come soon. Until then, please use the build recipes provided in the README.md file of the nx-libs source tree [7].

References

3 July 2016

Iustin Pop: A relaxation week

A (forced) relaxation week

This was an interesting week, much more so than I expected. The start of the week was the usual: on Monday a run, although at an easier pace after Sunday's longer indoor bike ride, on Tuesday a 30Km outside bike ride (flat, on road, with a mountain bike, so not fast at all). On Wednesday however, I had a planned "intervention" at my dentist: bone reconstruction (or regeneration, I'm not sure what the right term is for the implantation of scaffolding). The dentist told me I wouldn't be allowed to do sports, especially in the first few days after the procedure, so I knew I would have to take it easy; easy bike rides are fine, but not anything more (e.g. especially not running). The procedure went well and after that I went to work (the dentist looked at me in a funny way when I mentioned I was not going home but instead back to work). There was a bit of pain a couple of hours after the local anaesthesia went away, but the painkillers did work, so I was able to function somewhat OK. Laughing was the only thing that caused pain, so I tried to be very serious; that didn't work well...

On Thursday morning however, I did feel funny, and when I looked into the mirror I got a shock. The affected side of my face was heavily swollen, and I was feeling as bad as I looked. I had a followup checkup at the dentist, so I went there, and they told me: "Oh, this is normal. Bone reconstruction is much more difficult on the body as opposed to extraction, since the body actually has to rebuild stuff, instead of just healing the wound. And yes, you should just go back home and take the day off!" OK, logically that explanation makes sense, but my dental extraction had a very predictable pain/recovery curve (spike right at the extraction, plateau for that day, then slow recovery that turned into faster recovery after a few days). This procedure was very different, with the first day easy and the second day much worse. The dentist continued: "Oh, and by the way, expect this to be worse in the morning, as the body can work all night; also, this should go away by itself over the weekend, so let's meet again on Monday." At this point I realised that "I'm not allowed to do sports" was not by doctor's orders, but rather "my condition doesn't allow me to do sports". Sad panda.

Friday was even worse; my face was swollen in a different way, such that I looked even more like a monster from the Witcher games. I had to stay at home again, not being able to do much, as the painkillers I got were mostly ineffective. From my usual ~10K steps a day (or more if I run), my Friday was a paltry sub-2K step day. The only thing I was able to do was watch anime. I found Log Horizon to be a pretty interesting anime, much more so than what the synopsis said; the ramifications on politics and how the interaction between the two cultures unfolded were much more in-depth than I presumed. I haven't finished it yet, so this is a partial but very strong recommendation for it. Besides watching stuff, I also went to the shop to buy some food, which turned out to be an excuse for "junk food foraging!". The pain took my willpower away, and instead of the planned and short grocery list, I found myself with lots of chocolate and ice cream on my hands. Funny how the brain works...

On Saturday I was a bit better; the swelling went partially away, so if you squinted you could pretend I looked my normal-ugly, not the monster-ugly from before. I was able to go outside of the house, do some shopping, etc., so I was back to a ~9K steps day. I also stopped taking painkillers, since they weren't of much help anyway, and kept myself entertained with movies and other stuff (cough cough "Grim Dawn", since it's a mindless click-kill-loot-repeat ARPG that one can play even when only partially functional).

Today (Sunday) the swelling was slightly worse; however, I was feeling well enough to try to go back on the bike trainer (the first three days of "no sports" were over), and planned to do a slow/relaxing one hour Zwift ride. Right; as all the people who have ever tried this know, it works only as long as just the fast people overtake you (since you can't catch them anyway), or as long as you don't get to sprint sections. I did slightly improve my Watopia 300m sprint personal record (29.20s to 28.29s), which was good enough. After the first lap I took it easier (as in, I had to), since I was not really in shape. I was in any case very glad about ending my 4 day long break from sports!

So, my dentist was right indeed. The swelling did by and large clear up over the weekend (although I'll have to see how tomorrow will be), and they were also right about how much more difficult this was. On one hand it makes sense (growing bone does sound complex); on the other hand, I couldn't imagine that the body works so hard that it puts you out. The dentist was however slightly wrong with the "you should not do any strenuous activity, especially in the first three days"; they should have said "ha ha, you'll be flat out for the first days, take it easy and enjoy the painkillers instead". Looking forward now to getting back into my regular routine; relaxation is good, but only when done by choice, like most things in life...

1 July 2016

Elena 'valhalla' Grandi: Busy/idle status indicator

About one year ago, during my first DebConf (http://debconf15.debconf.org/), I felt the need for some way to tell people whether I was busy on my laptop doing stuff that required concentration, or just passing some time between talks etc. and available for interruptions, socialization or context switches.

One easily available method of course would have been to ping me on IRC (and then probably go on chatting on it while being in the same room, of course :) ), but I wanted to try something that allowed for less planning and worked even in places with less connectivity.

My first idea was a base laptop sticker with two statuses and then a removable one used to cover the wrong status and point to the correct one, and I still think it would be nice, but having it printed is probably going to be somewhat expensive, so I shelved the project for the time being.

[Image]

Lately, however, I've been playing with hexagonal stickers https://terinjokes.github.io/StickerConstructorSpec/ and decided to design something on this topic, with the result in the figure above: the hacking sticker is my first choice, and the concentrating alternative is probably useful while surrounded by people who may misunderstand the term "hacking".

While idly looking around for sticker printing prices I realized that it didn't necessarily have to be a sticker, and started to consider alternatives.

One format I'm trying is inspired by "do not disturb" door signs: I've used some laminating pouches I already had around, which are slightly bigger than credit-card format (but credit-card size would also work, of course), and cut a notch so that they can be attached to the open lid of a laptop.

[Image] [Image]

They seem to fit well on my laptop lid, and apart from a bad tendency to attract every bit of lint in a radius of a few meters the form factor looks good. I'll try to use them at the next conference to see if they actually work for their intended purpose.

SVG sources (and a PDF) are available on my website http://www.trueelena.org/computers/projects/busy_idle_indicator.html under the CC-BY-SA license.

26 June 2016

Iustin Pop: Random things of the week - brexit and the pretzel

Random things of the week

In no particular order (mostly). Coming back from the US, it was easier dealing with the jet-lag this time; doing sports in the morning or at noon and eating light in the evening helps a lot.

The big thing of the week, that has everybody talking, is of course brexit. My thoughts, as written before in a Facebook comment: direct democracy doesn't really work if it's done once in a blue moon. Wikipedia says there have been thirteen referendums in the UK since 1975, but most of them (10) were on devolution issues in individual countries, and only three were UK-wide referendums (quoting from the above page): the first on membership of the European Economic Community in 1975, the second on adopting the Alternative Vote system in parliamentary elections in 2011, and the third is the current one. Which means that a referendum is held every 13 years or so. At this frequency, people are a) not used to informing themselves on the actual issues, b) not believing that their vote will actually change things, and most likely c) not taking the "direct-democracy" aspect seriously (thinking beyond the issue at hand and how it will play together with all the rest of the political decisions). The result is what we've seen: leave politicians already backpedalling on issues, and confusion that yes, leave votes actually counted. My prognosis for what's going to happen: We'll see what happens though. Reading comments on various websites still makes me cringe at how small some people think: "independence" from the EU when the real issue is the EU versus the other big blocks (US, China, in the future India), and "versus" not necessarily in a conflict sense, but simply in terms of negotiating power, economic treaties, etc.

Back to more down-to-earth things: this week was quite a good week for me. Including commutes, my calendar turned out quite nice:

[Image: week calendar]

The downside was that most of those were short runs or bike sessions. My runs are now usually 6.5K, and I'll try to keep to that for a few weeks, in order to be sure that bone and ligaments have adjusted, and hopefully keep injuries away.

On the bike front, the only significant thing was that I did the Zwift Canyon Ultimate Pretzel Mission, on the last day of the contest (today): 73.5Km in total, in 3h27m. I've done 60K rides on Zwift before, so the first 60K were OK, but the last ~5K were really hard. Legs felt like logs of wood, and I was only pushing very weak output by the end, although I did hydrate and fuel up during the ride. But I was proud of the fact that on the last sprint (about 2K before the end of the ride) I managed ~34s, compared to my all-time best of 29.2s. Not bad after ~3h20m of riding and 1300 virtual meters of ascent. Strava also tells me I got 31 PRs on various segments, but that's because I rode on some parts of Watopia that I had never ridden before (mostly the reverse ones).

Overall, stats for this week: ~160Km in total (virtual and real, biking and running), ~9 hours spent doing sports. Still much lower than the amount of time I used to spend playing computer games, so it's a net win!

Have a nice start of the week everyone, and keep doing what moves you forward!

19 June 2016

Iustin Pop: Short trip to Seattle area

After last week's bike ride I had to pack my bags and get on a plane to Seattle on Monday morning. The time was so short that I even left the bike mounted on the car. So: 8:15, plane to Frankfurt, and then plane to Seattle. To my big surprise, the Lufthansa "extended leg room" seats were overly generous; I could actually extend my leg completely and put my foot on the back of the seat in front of me. Very good value when travelling in economy. The only downside was that these were "standard" not "premium" economy, so the seat had leg room but was very narrow, and with normal sized adults on either side of me it was somewhat difficult. The leg space allowed me to work on my laptop without fearing that the person in front of me would recline their seat and break my screen (almost happened once).

The funniest thing when travelling is that food is always tricky: even familiar food can be not what you expect. Case in point: me at the salad buffet, seeing slices of green vegetables, and asking myself "Are those jalapeno slices, or bell pepper slices? Hmm, I'm sure they're bell pepper", which resulted in my first vegetable salad with jalapeno. Would definitely recommend if you like spicy things!

Otherwise the trip was as usual, but shorter and more densely packed with meetings, with a one-day exception: I had the opportunity to experience Whirlyball for the first time, which was more fun and more difficult than it first looked. I also spent an afternoon on Whidbey Island, which fortunately was also the nicest day, weather-wise, of the week (all phone pictures, not colour corrected, straight out of phone):

[Images: from the ferry; at the end of Hobbit Trail; before dinner]

A few more pictures here. I keep being amazed by the nature in this area, definitely my preferred place in the US of the relatively few I have visited. Had a nice dinner as well at Cafe Langley, which surprisingly had reasonably-authentic Mediterranean food; the Baba Ganoush was excellent.

Other than that outing, nothing worth describing, except that I really missed my Zwift or outdoor rides. The experience of using the stationary bikes in the hotel does not compare, so I resorted more to running on the treadmill (hmm, Zwift for running, hmm...); if the foot pod calibration is to be trusted, I continued to slightly improve my 1K, 1mi, and 5K times. Not bad; I might want to join some running races as well this summer, but I need to take it easy and make sure not to get injured again.

And finally, week over, flew back home, slept a bit mid-day (which will ruin my jet lag recovery program), unloaded my bike from the car (and checked it still works), and... got on the trainer and did a Zwift ride. Jet lagged, but managed to beat my Watopia sprint record by a tiny bit, and complete a new workout ("The Gorby"), which was interesting. I can stop any time I want, definitely (I just need to take a trip away from home...).

12 June 2016

Iustin Pop: Elsa Bike Trophy 2016 my first bike race!

Elsa Bike Trophy 2016: my first bike race!

So today, after two months of intermittent training using Zwift and some actual outside rides, I did my first bike race. Not my first of 2016, not of 2000+, but my first ever. Which is strange, as I learned biking very young, and I did like to bike. But as it turned out, even though I didn't like running as a child, I did participate in a number of running events over the years, but no biking ones.

The event

Elsa Bike Trophy is a mountain bike event (cross-country, not downhill or anything crazy); it takes place in Estavayer-le-Lac and has two courses: one of 60Km with 1'791m altitude gain, and a smaller one of 30Km with 845m altitude gain. I went, of course, for the latter. 845m is more than I ever did in a single ride, so it was good enough for a first try. The web page says that the smaller course "est nerveux, technique et ne laisse que peu de répit" (is nervous, technical and allows only little respite). I chose to think that's a bit of an exaggeration, and that it would be relatively easy (as I'm not too skilled technically). The atmosphere there was like at running races, with the exception of bike stuff being sold, and people on very noisy rollers. I'm glad for my trainer, which sounds many decibels quieter.

The long race started at 12:00, and the shorter one at 12:20. While waiting for the start I had two concerns in mind: whether I would be able to do the whole course (endurance), and whether it would be too cold (the weather kept moving towards rain). I had a small concern about the state of the course, as the weather had not been very nice recently, but only a small one. And then, after an hour plus of waiting: go!

Racing, with a bit of "swimming"

At first things went as expected. Starting on paved roads, moving towards the small town exit, a couple of 14% climbs, then more flat roads, then a nice and hard short 18% climb (I'll never again complain about <10%!), then entering the woods. It became quickly apparent that the ground in the forest was in a much worse state than I had feared. Much worse as in a few orders of magnitude. In about 5 minutes after entering the tree cover, my reasonably clean, reasonably light bike became a muddy, heavy monster. And the pace that until then went quite OK became walking pace, as the first rider who didn't manage to keep going up, because his wheel turned out of the track, blocked the one behind him, who had to stop; repeat until we were one line (or two, depending on how wide the trail was) of riders walking their bikes up. While on dry ground walking your bike up is no problem, and hiking through mud with good hiking shoes is also no problem, walking up in biking shoes is a pain. Your foot slides and you waste half of your energy "swimming" in the mud. Once the climb is over you get on the bike, and of course the pedals and cleats are full of heavy mud, so it takes a while until you can actually clip in. Here the trail version of SPD was really useful, as I could pedal reasonably well without being clipped in; I just had to be careful and not push too hard. Then maybe you exit the trail and get on a paved road, but the wheels are so full of mud that you are still very slow (and accelerate very slowly), until they shed enough of the mud to become somewhat more "normal". After a bit of this "up through mud, flat and shedding mud", I came upon the first real downhill section. I would have been somewhat confident on dry ground, but I got scared and got off my bike. Better safe than sorry was the thing for now.

And after this it was a repetition of the above: climbs, sometimes (rarely) on the bike, most times pushing the bike; fast flat sections through muddy terrain where any mistake in controlling the bike can send the front wheel flying due to the mud being highly viscous; slow flat sections through very liquid mud where it definitely felt like swimming; and the odd dry section. My biggest fear, uphill/endurance, was unfounded. The most gains I made were on the dry uphills, where I had enough stamina to overtake. On flat ground I mostly kept order (i.e. neither being overtaken nor overtaking), but on downhill sections I lost lots of time and was overtaken a lot. Still, it was a good run.

And then, after about 20 kilometres out of the 30, I got tired enough of getting off the bike, on the bike, and also tired mentally from not being careful enough, that I stopped getting off the bike on downhills. And the feeling was awesome! It was actually much, much easier to flow through the mud and rocks and roots on downhill, even when it was difficult (for me) like 40cm drops (estimated), than doing it on foot, where you slide without control and the bike can come crashing down on you. It was a liberating feeling, like finally having overcome the mud. I was so glad to have done a one-day training course with Swiss Alpine Adventure, as it really helped. Thanks Dave! Of course, people were still overtaking me, but I also overtook some people (who were on foot; he he, I wasn't the only one it seems). And it being easier, I had some more energy, so I was able to push a bit harder on the flats and dry uphill sections. And then the remaining distance started shrinking, and the last downhill was over, I entered through familiar roads into the small town, a passer-by cried "one kilometre left", I pushed hard (I mean, as hard as I could after all the effort), and I reached the finish.

Oh, and my other concern, the rain? Yes, it did rain somewhat, and I was glad for it (I keep overheating); there was a single moment I felt cold, when exiting a nice cosy forest into a field where the wind was very strong (headwind, of course).

Lessons learned

I did learn a lot in this first event.

Results

So, how did I do after all? As soon as I reached the finish and recovered my items, among which the phone, I checked the datasport page: I was ranked 59/68 in my category. Damn, I hoped (and thought) I would do better. Similar percentages in the overall ranking for this distance. That aside, it was mighty fun. So much fun I'd do it again tomorrow! I had forgotten the awesome atmosphere of such events, even at the back of the rankings. And then, after I drove home and opened the datasport page on my workstation, I got very confused: the overall number of participants was different. And then I realised: not everybody had finished the race when I first checked (d'oh)! Final ranking: 59 out of 84 in my category, and 247/364 in the overall 30Km rankings. That makes 70% and 67% respectively, which matches somewhat my usual running results from a few years back (but is a bit worse). It is in any case better than what I originally thought, yay! Also, the Strava activity has some more statistics (note that my Garmin says it was not 800+ meters of altitude...); I'd embed a nice VeloViewer 3D map but I can't seem to get the embed option, hmm.

TODO: train more endurance, train more technique, train in more varied conditions!

30 May 2016

Daniel Stender: My work for Debian in May

No double posting this time ;-) I didn't have much spare time this month to spend on Debian, but I could work on the following packages: This series of blog postings also includes little introductions of new packages in the archive. This month there is: Pyinfra. Pyinfra is a new project which is still in development. It has already been covered in an interesting German article [1], and is now available as a package maintained within the Python Applications Team. It's currently a one-man production by Nick Barrett, eagerly developed in the past weeks (we're currently at 0.1~dev24). Pyinfra is a remote server configuration/provisioning/service deployment tool in the same software category as Puppet or Ansible [2]. It's for provisioning one or an array of remote servers with software packages and for configuring them. Like Ansible, Pyinfra runs agentless: nothing special (like a daemon) has to run on the targeted servers. It's written for provisioning POSIX compatible Linux systems and has alternatives when it comes to distribution-specific features like package managers (e.g. it supports apt as well as yum). The documentation can be found in /usr/share/doc/pyinfra/html/. Here's a little crash course on how to use Pyinfra. The pyinfra CLI tool is used on the command line like this; deploy scripts, single operations or facts (see below) can be applied to a single server or a multitude of remote servers:
$ pyinfra -i <inventory script/single host> <deploy script>
$ pyinfra -i <inventory script/single host> --run <operation>
$ pyinfra -i <inventory script/single host> --facts <fact>
Remote servers which are operated on must provide a working shell and must be reachable by SSH. For connecting, the --port, --user, --password, --key/--key-password and --sudo flags are available, --sudo to gain superuser rights. Root access or sudo rights of course have to be set up already. By the way, localhost can be operated on the same way. Single operations are organized in modules like "apt", "files", "init", "server" etc. With the --run option they can be used individually on servers as follows; e.g. server.user adds a new user on a single targeted system (-v adds verbosity to the pyinfra run):
$ pyinfra -i 192.0.2.10 --run server.user sam --user root --key ~/.ssh/sshkey --key-password 123456 -v
Multiple servers can be grouped in inventories, which hold the targeted hosts and the data associated with them. E.g. an inventory file farm1.py would contain lists like this:
COMPUTE_SERVERS = ['192.0.2.10', '192.0.2.11']
DATABASE_SERVERS = ['192.0.2.20', '192.0.2.21']
Group designators must be all caps. A higher level of grouping is provided by the file names of inventory scripts: COMPUTE_SERVERS and DATABASE_SERVERS can be referenced together via the group designator farm1. Plus, all servers are automatically added to the group all. Inventory scripts should be stored in the subfolder inventory/ in the project directory. Inventory files can then be used instead of specific IP addresses like this; the single operation is performed on all the machines given in farm1.py:
$ pyinfra -i inventory/farm1.py  --run server.user sam --user root --key ~/.ssh/sshkey --key-password=123456 -v
Deployment scripts can be used together with group data files in the subfolder group_data/ in the project directory. For example, group_data/farm1.py applies to all servers given in inventory/farm1.py (likewise, all.py applies to all servers), and contains the arbitrary attribute user_name (attributes must be lowercase), next to authentication data for the whole inventory group:
user_name = 'sam'
ssh_user = 'root'
ssh_key = '~/.ssh/sshkey'
ssh_key_password = '123456'
The custom attribute can then be picked up by a deployment script via host.data; e.g. user_name can be fed to server.user(), like this:
from pyinfra import host
from pyinfra.modules import server
server.user(host.data.user_name)
This deploy, the ensemble of inventory file, group data file and deployment script (the latter usually placed top level in the project folder), can then be run this way:
$ pyinfra -i inventory/farm1.py deploy.py
You have guessed it: since deployment scripts are Python scripts, they are fully programmable (note that Pyinfra is built for and runs on Python 3 on Debian), and that's the main selling point of this piece of software. Quite handy here are Pyinfra facts, functions which check different things on remote systems and return the information as Python data. E.g. deb_packages returns a dictionary of the installed packages on a remote apt based server:
$ pyinfra -i 192.0.2.10 --fact deb_packages --user root --key ~/.ssh/sshkey --key-password=123456
{
    "192.0.2.10": {
        "libdebconfclient0": "0.192",
        "python-debian": "0.1.27",
        "libavahi-client3": "0.6.31-5",
        "dbus": "1.8.20-0+deb8u1",
        "libustr-1.0-1": "1.0.4-3+b2",
        "sed": "4.2.2-4+b1",
        ...
    }
}
Using facts, Pyinfra reveals its full potential. For example, a deployment script could go like this; the linux_distribution fact returns a dict containing the name of the installed distribution:
from pyinfra import host
from pyinfra.modules import apt
if host.fact.linux_distribution['name'] == 'Debian':
    apt.packages(packages='gummi', present=True, update=True)
elif host.fact.linux_distribution['name'] == 'CentOS':
    pass
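Putting the pieces together, a deploy.py for the farm1 inventory could combine the group data attribute with a fact-guarded operation. The following sketch reuses only the operations shown above; it is my illustration, not an example from the Pyinfra documentation:

from pyinfra import host
from pyinfra.modules import apt, server

# user_name comes from group_data/farm1.py (see above)
server.user(host.data.user_name)

# only touch apt on machines that actually run Debian
if host.fact.linux_distribution['name'] == 'Debian':
    apt.packages(packages='gummi', present=True, update=True)

As before, this would be run against the whole inventory with pyinfra -i inventory/farm1.py deploy.py.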
I'll spare you more sophisticated examples to keep this introduction simple. Beyond fancy deployment scripts, Pyinfra features its own API through which it can be programmed from the outside, and much more. But maybe that's enough to introduce Pyinfra; those are the usage basics. Pyinfra is a brand new project and it remains to be seen whether the developer can keep developing the tool as eagerly as he does these days. For a private project it would be insane to attempt to become a contender for the established "big" free configuration management tools and frameworks, but, whether or not Puppet has become too complex in the meanwhile [3], I really don't think that's the point here. Pyinfra follows its own approach in being programmable the way it is. And it definitely does no harm to have it in the toolbox already, without it having to replace anything.

Brainstorm

After a first package in experimental, the Brainstorm library from the Swiss AI research institute IDSIA [4] is now available as python3-brainstorm in unstable. Brainstorm is a lean, easy-to-use library for setting up deep learning networks (multi-layered artificial neural networks) for machine learning applications like image and speech recognition or natural language processing. Setting up a working training network for a classifier of handwritten digits like the MNIST dataset (the usual "hello world" of machine learning) takes just a couple of lines, as an example demonstrates. The package is maintained within the Debian Python Modules Team. The Debian package ships a couple of examples in /usr/share/python3-brainstorm/examples (the data/ and examples/ folders of the upstream tarball are combined here) [5]. The current documentation in /usr/share/doc/python3-brainstorm/html/ isn't complete yet (several chapters are under construction), but there's a walkthrough on the CIFAR-10 example. The MNIST example has been extended by Github user pinae, and was recently explained in the German C't [6].

What are the perspectives for further development? As Zhou Mo confirmed, there are a couple of deep learning frameworks around with a rather poor outlook, since they were abandoned after being completed as PhD projects. There's really no point in striving to have them all in Debian; the ITP of Minerva, for example, has been given up partly for this reason: there haven't been any commits since 08/2015 (and because cuDNN isn't available and most likely won't be). Brainstorm, whose 0.5 release dates from 05/2015, also was a PhD project, at IDSIA. Github states that the project is "under active development", while the rather sparse project page on the other hand expresses the "hope the community will help us to further improve Brainstorm", a sentence which often enough implies that the developers are not actively working on a project. But there are recent commits, and it looks like upstream is active and can be reached when there are problems. So I don't think we're riding a dead horse here. The downside for Brainstorm in Debian is that the libraries needed for GPU accelerated processing apparently can't be fully provided. Pycuda is available, but scikit-cuda (an additional library which provides wrappers for CUDA features like CUBLAS, CUFFT and CUSOLVER) is not and won't be, because the CULA Dense Toolkit (for which scikit-cuda also contains wrappers) is not freely available as source. Because of that, a dependency on pycuda has been spared, not even as a Suggests (it's non-free).
Without GPU acceleration, Brainstorm computes the matrices on OpenBLAS through a Cython wrapper in the NumpyHandler; the PyCudaHandler can't be used. OpenBLAS makes pretty good use of the available hardware (it distributes work over all available CPU cores), but it's not yet possible to run Brainstorm at full throttle using the available floating point devices to reduce training times, which becomes crucial when projects get bigger. Brainstorm thus joins the deep learning frameworks that are already, or are becoming, available in Debian. I've also checked out Microsoft's CNTK, but although it was likewise set free recently, I have my doubts it could be included: apparently there are dependencies on non-free software, and most likely other issues. So much for a little update on the state of deep learning in Debian; please excuse it if my radar has missed something.

  1. Tim Schürmann: "Schlangenöl: Automatisiertes Service-Deployment mit Pyinfra". In: IT-Administrator 05/2016, pp. 90-95.
  2. For a comparison of configuration management software like this, see B wetter/Johannsen/Steig: "Baukastensysteme: Konfigurationsmanagement mit Open-Source-Software". In: iX 04/2016, pp. 94-99 (please excuse the prevalence of German articles among the pointers, I just have them at hand).
  3. On the points of critique of Puppet, see Martin Loschwitz: "David gegen Goliath: Zwei Welten treffen aufeinander: Puppet und Ansible". In: Linux-Magazin 01/2016, pp. 50-54.
  4. See the interview with IDSIA's deep learning guru Jürgen Schmidhuber in the German C't 2014/09, p. 148.
  5. The example scripts need some more fine-tuning. The environment variable BRAINSTORM_DATA_DIR can be set so the data creation scripts know where to put their data, but the trained networks currently try to write their output in place. So please copy the scripts into some workspace if you want to try them out. I'll patch the example scripts to run out of the box, soon.
  6. Johannes Merkert: "Ziffernlerner. Ein künstliches neuronales Netz selber gebaut". In: C't 2016/06, pp. 142-147. Web: http://www.heise.de/ct/ausgabe/2016-6-Ein-kuenstliches-neuronales-Netz-selbst-gebaut-3118857.html.
  7. See Ramon Wartala: "Tiefenschärfe: Deep Learning mit NVIDIAs Jetson-TX1-Board und dem Caffe-Framework". In: iX 06/2016, pp. 100-103.
  8. https://lists.debian.org/debian-science/2016/03/msg00016.html

29 May 2016

Iustin Pop: Mind versus body: time perception

Since mid-April I've been playing a new game. It's really awesome, and I learned some surprising things. The game: Zwift is quite different from the games I usually play. While it does have all or most of the elements of a game, more precisely of an MMO, the main point of the game is physical exercise (in the real world). The in-game performance is the result of the (again, real-world) power output. Playing the game is more or less like many other games: very nice graphics, varied terrain (or not), interaction, or better said competition, with other players, online leaderboards, races, gear "upgrades" (only cosmetic AFAIK), etc. The game progresses more or less as usual, but the fact that the main driver is the body changes, to my surprise, the time component of the game. For me, with a normal game, let's say one of Bioware's Dragon Age games or one of CD Projekt Red's Witcher games, a short gaming session is 2-3 hours, a reasonable session 6-8 hours, and longer ones are for "marathon" gaming sessions. Playing a good game for one hour feels like you've been cheated: one barely starts and has to stop. On Zwift, things are different. A short session is 20-30 minutes, but this already feels good. A good one is more than one hour, and for me, the longest rides I've had were three hours. A three-hour session, if done at or near Functional Threshold Power (see here for another article about it), leaves me spent. I had just such a long ride today (at around 85% FTP) and it took me an hour afterwards (and eating) to recover. The interesting part is that, body exertion aside, the brain sees a 3-hour Zwift ride as equivalent to an 8-10 hour gaming session. Both are tiring, and the perception of passed time is the same (long). Same with shorter sessions: if I do a 40-minute ride, it feels subjectively as rewarding as a 2-3 hour normal gaming session. I wonder what mechanism it is that influences this perception. Is it just effort level? But there's no real effort (as in increased heart rate) in computer games. Is it the fact that so much blood is needed for the muscles when cycling that the brain gets comparatively little, so it enters slow-speed mode (hey, who pressed the Turbo button)? In any case, using Zwift results in a much more efficient use of my time when I'm playing just to decompress/relax. Another interesting difference is how much a good night's sleep matters for body performance. With computer games, it makes a difference, but not a huge one, and it usually goes away a couple of hours into the game, at least subjectively. With cycling, a bad night results in persistent lower performance all around (for me at least), and one that you can easily feel (e.g. in max 5-second average power). And the last thing I learned, although this shouldn't be a surprise: my FTP is way lower than it's supposed to be (according to the internet). I guess the hundreds of hours I put into pure computer games didn't do anything for my fitness, to my "surprise". I'm curious to see, if I can keep this up, what things will look like in ~6 months or so.

6 May 2016

Norbert Preining: Michael Köhlmeier: Zwei Herren am Strand

This recent book by the Austrian author Michael Köhlmeier, Zwei Herren am Strand ("Two Gentlemen on the Beach", Hanser Verlag), spins a story about an imagined friendship between Charlie Chaplin and Winston Churchill. While there might hardly be two more different people than these, in the book they are connected by a common fight: the fight against their own depression, explicitly, as well as implicitly by fighting Nazi Germany. In a series of short chapters the story of these two great men from the beginning of the last century is told, mixing reality with a good portion of fantasy, especially when it comes to their meetings and uncommon friendship. For me this book is in the best tradition of Michael Köhlmeier, whom I consider an excellent storyteller. I loved his (unfinished trilogy of) books on Greek mythology (Telemach and Calypso), but found that after these books he got lost too much in radio programs of storytelling. While good in itself, I preferred his novels. Thus, I have to admit that I had forgotten about Köhlmeier for some years, until recently I found this little book, which reminded me of him and his excellent stories. A book that is, if you are versed in German, well worth enjoying, especially if one likes funny and slightly quirky stories.

26 April 2016

Rhonda D'Vine: Prince

Last week we lost another great musician, songwriter, artist. It's painful to realise that more and more of the people you grew up with aren't there anymore. We lost Prince, TAFKAP, Symbol, Prince. He wrote a lot of great music, even some you wouldn't attribute to him, like Sinead O'Connor's Nothing Compares 2 U, the Bangles' Manic Monday or Chaka Khan's I Feel For You. But I would actually like to share some songs that he performed himself, so without further ado here are the songs: Rest in peace, Prince. And you, enjoy.


31 March 2016

Antoine Beaupré: My free software activities, March 2016

Debian Long Term Support (LTS)

This is my 4th month working on Debian LTS, started by Raphael Hertzog at Freexian. I spent half of the month away on vacation, so little work was done, especially since I tried to tackle rather large uploads like NSS and Xen. I also worked the frontdesk shift last week.

Frontdesk

That work mainly consisted of figuring out how to best help the security team with the last uploads to the Wheezy release. For those who don't know, Debian 7 Wheezy, or "oldstable", is going to be unsupported by the security team starting end of April, and Debian 6 Squeeze (the previous LTS) is now unsupported. The PGP signatures on the archived release have started yielding expiration errors; those can be ignored, but they are a strong reminder that it is really time to upgrade. So the LTS team is now working towards backporting a few security fixes from squeeze to wheezy, and this is what I focused on during triage work. I have identified the following high priority packages that I will work on after I complete my work on the Xen and NSS packages (detailed below):
  • libidn
  • icu
  • phpmyadmin
  • tomcat6
  • optipng
  • srtp
  • dhcpcd
  • python-tornado

Updates to NSS and Xen

I have spent a lot of time testing and building packages for NSS and Xen. To be fair, Brian May did most of the work on the Xen packages; I merely did some work to test the packages on Koumbit's infrastructure, something I will continue doing in the next month. For NSS, wheezy and jessie are in this weird state where patches were provided to the security team all the way back in November yet were never tested. Since then, yet more issues came up, and I worked hard to review and port patches for those new security issues to wheezy. I'll follow up on both packages in the following month.

Other free software work

Android

TL;DR: there's an even longer version of this, with the step-by-step procedures, that I will keep updating in my wiki. I recently inherited an Android phone, on loan from a friend, because the phone broke one too many times and she got a new one from her provider. The phone is an HTC One S "Ville", which is fairly old, but good enough to play with and to give me a mobile computing platform to listen to podcasts, play music, access maps and create GPS traces. I was previously doing this with my N900, but that device is really showing its age: very little development is happening on it, the SDK is closed source, and the device itself is fairly big compared to the Ville. Plus, the SIM card actually works on the Ville, so even though I do not have an actual contract with a cell phone provider (too expensive, too invasive of my privacy), I can still make emergency phone calls (911)! And since there is good wifi on the machine, I can use it to connect to the phone system with the built-in SIP client, send text messages through SMS (thanks to VoIP.ms SMS support) or Jabber. I have also played around with LibreSignal, the free software replacement for Signal, which uses proprietary Google services. Yes, the VoIP.ms SMS app also uses GCM, but hopefully that can be fixed. (While I was writing this, another Debian Developer wrote a good review of Signal, so I am happy to skip that step. Go read that.)
See also my apps list for a more complete list of the apps I have installed on the phone. I welcome recommendations on cool free software apps I should use!
I have replaced the stock firmware on the phone with Cyanogenmod 12.1, which was a fairly painful experience, partly because of the difficult atmosphere on the #cyanogenmod channel on Freenode, where I had extreme experiences: a brave soul helped me through the first flashing process for around 2 hours, nicely holding my hand at every step, while at other times I saw flames and obtuse comments from users being vulgar, brutal, obnoxious, if not sometimes downright homophobic and sexist. It is clearly a community that needs to fix its attitude. I have documented everything I could in detail in this wiki page, in case others want to resuscitate their old phones, but also because I ended up reinstalling the freaking phone about 4 times and was getting tired of forgetting how to do it every time. I am somewhat fascinated by Android: here is the Linux-based device that should save us all from the proprietary Apple nightmare of fenced-in gardens and censorship. Yet public Android downloads are hidden behind license agreements, even though the code itself is free, which has led fellow Debian developers to work on libre rebuilds of Android to work around this insanity. But worse: all phones are basically proprietary devices down to the core. You need custom firmware loaded on the machine for it to boot at all, from the bootloader all the way down to the GSM baseband and wifi drivers. It is a minefield of closed source software, and trying to run free software on there is a bit of a delusion, especially since the baseband has so much power over the phone. Still, I think it is really interesting to run free software on those machines, and to help people who are stuck with cell phones get familiar with software freedom. It seems especially important to make Android developers aware of software freedom, considering how many apps are available at no cost yet cannot meaningfully be contributed to, because the source code is not published at all, or is published only on the Google Store instead of the more open and public F-Droid repository, which publishes only free software. So I did contribute. This month, I am happy to announce that I contributed to the following free software projects on Android: I have also reviewed the literature surrounding Telegram, a popular messaging app rivalling Signal and Whatsapp. Oddly enough, my contributions to Wikipedia on that subject were promptly reverted, which made me bring up the subject on the article's Talk page. This led to an interesting response from the article's main editors, which at least added that its "security features have been contested by security researchers and cryptography experts". Considering the history of Telegram, I would keep people away from it and direct them to use Signal instead, even though Signal has similar metadata issues, mostly because of Telegram's lack of response to the security issues outlined by fellow security researchers. Both systems suffer from a lack of federation as well, which is a shame in this era of increasing centralization. I am not sure I will put much more work into developing for Android for now. It seems like a fairly hostile platform to work on, and unless I have specific pain points I want to fix, it feels so much better to work on my own stuff in Debian. Which brings me to my usual plethora of free software projects I got stuck in this month.

IRC projects irssi-plugin-otr had a bunch of idiotic bugs lying around, and I had a patch that I hadn't submitted upstream from the Debian package, which needed a rebuild because the irssi version changed, which is a major annoyance. The version in sid is now a snapshot because upstream needs to make a new release but at least it should fix things for my friends running unstable and testing. Hopefully those silly uploads won't be necessary in the future. That's for the client side. On the server side, I have worked on updating the Atheme-services package to the latest version, which actually failed because the upstream libmowgli is missing release tags, which means the Debian package for it is not up-to-date either. Still, it is nice to have a somewhat newer version, even though it is not the latest and some bugs were fixed. I have also looked at making atheme reproducible but was surprised at the hostility of the upstream. In the end, it looks like they are still interested in patches, but they will be a little harder to deploy than for Charybdis, so this could take some time. Hopefully I will find time in the coming weeks to test the new atheme services daemon on the IRC network I operate.

Syncthing

I have also re-discovered Syncthing, a file synchronization program. Amazingly, I was having trouble transferring a single file between two phones. I couldn't use Bluetooth (not sure why), the "Wifi sharing" app was available only on one phone (and is proprietary, and has a 4MB file size limit), and everything else requires an account, the cloud, or cabling. So: just head to F-Droid, install Syncthing, flash a few QR codes around and voilà: files are copied over! Pretty amazing: the files were actually copied over the local network, using IPv6 link-local addresses, encryption and the DHT. Which is a real geeky way of saying it's completely fast, secure and fast. Now, I found a few usability issues, so much so that I wrote a whole usability story for the developers, who were really appreciative of my review. Some of the issues were already fixed, others were pretty minor. Syncthing has a great community, and it seems like a great project I encourage everyone to get familiar with.

Battery stats

The battery-status project I mentioned previously has been merged with the battery-stats project (yes, the names are almost the same, which is confusing), so I had to do some work to fix my Python graph script, which was accepted upstream and will now officially be part of Debian, which is cool. The previous package was unofficial. I have also noticed that my battery holds significantly less charge than when I wrote the script. Whereas it was basically at full capacity back then, it seems it has now lost almost 15% of its capacity in about 6 months. According to the calculations of the script:
this battery will reach end of life (5%) in 935 days, 19:07:58.336480, on 2018-10-23 12:06:07.270290
Which is, to be fair, a good life: it will keep working, more or less, for about three more years.
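For illustration, the extrapolation such a script performs boils down to a linear fit over capacity samples. Here is a minimal sketch of that idea in Python, with made-up sample values; this is not the actual battery-stats code:

from datetime import datetime, timedelta

# hypothetical (timestamp, remaining capacity fraction) samples,
# e.g. parsed from periodic battery readings
samples = [
    (datetime(2015, 10, 1), 0.99),
    (datetime(2016, 1, 1), 0.92),
    (datetime(2016, 3, 31), 0.85),
]

# ordinary least-squares fit of capacity against elapsed days
t0 = samples[0][0]
xs = [(t - t0).total_seconds() / 86400 for t, _ in samples]
ys = [c for _, c in samples]
n = len(xs)
xm, ym = sum(xs) / n, sum(ys) / n
slope = (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
         / sum((x - xm) ** 2 for x in xs))
intercept = ym - slope * xm

# project the date at which capacity crosses the 5% threshold
eol_days = (0.05 - intercept) / slope
print("end of life (5%) on", (t0 + timedelta(days=eol_days)).date())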

Playlist, git-annex and MPD in Python

On top of my previously mentioned photos-import script, I have worked on two more small programs. One is called get-playlist and is an extension to git-annex that easily copies into the local git-annex repository all the files present in a given M3U playlist. This is useful to me because my phone cannot possibly fit my whole MP3 collection, and I use playlists in GMPC to tag certain files, particularly the Favorites list, which is populated by the "star" button in the UI. I had a lot of fun writing this script. I started using elpy as an IDE in Emacs. (Notice how Emacs got a new webpage, a huge improvement over what had been basically unchanged since the original version, now almost 20 years old and probably written by RMS himself.) I wonder how I managed to stay away from Elpy for so long, as it glues together key components of Emacs in an elegant and functional bundle:
  • Company: the "vim-like" completion mode I had been waiting for forever
  • Jedi: context-sensitive autocompletion for Python
  • Flymake: to outline style and syntax errors (unfortunately not the more modern Flycheck)
  • inline documentation...
In short, it's amazing and makes everything so much easier to work with; so much so that I wrote another script. The first program wouldn't work very well because some songs in the playlists had been moved, so I made another one, this time to repair playlists which refer to missing files. The script is simply called fix-playlists and can operate transparently on multiple playlists. It has a bunch of heuristics to find files, using an MPD server as a directory to search in, and it can edit files in place or just act as a filter.
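To illustrate the idea behind get-playlist, here is a minimal sketch, assuming a plain M3U file and the standard git-annex CLI; it is my reconstruction, not the author's actual script:

import subprocess
import sys

def get_playlist(playlist_path):
    """Fetch the content of every file in an M3U playlist via git-annex."""
    with open(playlist_path) as playlist:
        for line in playlist:
            path = line.strip()
            if not path or path.startswith('#'):  # skip blanks and M3U comments
                continue
            # 'git annex get' retrieves the file content into the local repo
            subprocess.check_call(['git', 'annex', 'get', path])

if __name__ == '__main__':
    get_playlist(sys.argv[1])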

Useful snippets

Writing so many scripts, so often, I figured I needed to stop wasting time writing the same boilerplate on top of every file, so I started publishing Yasnippet-compatible file snippets in my snippets repository. For example, this report is based on the humble lts snippet. I also have a base license snippet which slaps the AGPLv3 license on top of a Python file. But the most interesting snippet, for me, is the simple script snippet, a basic scaffolding for a command-line script that includes argument processing, logging and filtering of files, something I was always copy-pasting around.
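As a rough idea of what such a scaffolding might expand to (my guess at the shape, not the actual snippet, which lives in the linked repository):

import argparse
import logging

def parse_args():
    parser = argparse.ArgumentParser(description='example scaffolding')
    parser.add_argument('files', nargs='*', help='files to operate on')
    parser.add_argument('-v', '--verbose', action='store_true',
                        help='enable debug logging')
    return parser.parse_args()

def main():
    args = parse_args()
    logging.basicConfig(
        level=logging.DEBUG if args.verbose else logging.INFO)
    for path in args.files:  # filter and process the given files
        logging.info('processing %s', path)

if __name__ == '__main__':
    main()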

Other projects

And finally, an assorted list of interesting issues:
  • there's a new Bootstrap 4 theme for Ikiwiki. It was delivering content over CDNs, which is bad for privacy issues, so I filed an issue which was almost immediately fixed by the author!
  • I found out about BitHub, a tool to make payments with Bitcoins for patches submitted on OWS projects. It wasn't clear to me where to find the demo, but when I did, I quickly filed a PR to fix the documentation. Given the number of open PRs there and the lack of activity, I wonder if the model is working at all...
  • a fellow Debian Developer shared his photos workflow, which was interesting to me because I have my own peculiar script, which I never mentioned here, but which I ended up sharing with the author

Lars Wirzenius: New job: QvarnLabs

Today was my last day at Suomen Tilaajavastuu, where I worked on Qvarn. Tomorrow is my first day at my new job. The new job is for a new company, tentatively named QvarnLabs (registration is in process), to further develop and support Qvarn. The new company starts operation tomorrow, so you'll have to excuse me that there isn't a website yet. Qvarn provides a secure, RESTful JSON HTTP API for storing and retrieving data, with detailed access control (and I can provide more buzzwords if necessary). If you operate in the EU, and store information about people, you might want to read up about the General Data Protection Regulation, and Qvarn may be a possible part of a solution you want to look into, once we have the website up.

28 March 2016

Rhonda D'Vine: Ich bin was ich bin

As my readers are probably well aware, I wrote my transgender coming-out poem Mermaids over 10 years ago, to make clear to people how I define myself, what I am, and how I hope they can accept me. I put it publicly on my blog so I could point people to it, and I still do so regularly. It still comes from the bottom of my heart. And I am very happy that I got the chance to present it at a Poetry Slam last year; it was even recorded and uploaded to YouTube. There is just one thing that some people told me now and then over the years, people who would have liked to understand what's going on: why is it in English? My English isn't that good. My usual response was along the lines of: the events that triggered me into writing it happened in an international context, and I wanted to make sure the people involved understood what I wrote. At that time I didn't realize that I was cutting out a different group of people from being able to understand what's going on inside me. So this year there was a similar event: the Flawless Poetry Slam, which touched the topics Feminist? Queer? Gender? Rolemodels? - Let's talk about it. I took that as motivation to finally write another text on the topic, this time in German. Unfortunately I wasn't able to present it that evening; I wasn't drawn for the lineup. But I was told that there was another slam going on just last Wednesday, so I went there ... and made it onto the stage! This is the text I presented there. I am uncertain how well online translators work for you, but I hope you get the core points even if you don't understand German:
Ich bin was ich bin
Fünf Worte mit wahrem Sinn:
Ich bin was ich bin

Du denkst: "Mann im Rock?
Das ist ja wohl lächerlich,
der ist sicher schwul."

"Fingernagellack?
Na da schau ich nicht mehr hin,
wer will das schon seh'n."

Jedoch liegst du falsch,
Mit all deinen Punkten, denn:
Ich bin was ich bin.

Ich bin Transgender
Und erlebe mich selber,
ich bin eine Frau.

"Haha, eine Frau?
Wem willst du das weismachen?
Heb mal den Rock hoch!"

Und wie ist's bei dir?
Was ist zwischen den Beinen?
Geht mich das nichts an?

Warum fragst du mich?
Da ist's dann in Ordnung?
Oder vielleicht nicht?

Ich bin was ich bin
Fünf Worte mit ernstem Sinn:
Ich bin was ich bin

Ich steh weiblich hier
Und das hier ist mein Körper
Mein Geschlecht ist's auch

Oberflächlichkeit
Das ist mein größtes Problem
Schlägt mir entgegen

Wenn ich mich öffne
Verständnis fast überall
Es wird akzeptiert

Doch gelegentlich
und das schmerzt mich am meisten
sagt doch mal wer "er"

Von Fremden? Egal
Doch hab ich mich geöffnet
Ist es eine Qual

"Ich seh dich als Mann"
Da ist, was es transportiert
Akzeptanz? Dahin

Meine Pronomen
Wenn ihr über mich redet
sind sie, ihr, ihres

Ich leb was ich leb
Fünf Worte mit tiefem Sinn:
Ich bin was ich bin

"Doch, wie der erst spricht!
Ich meinte, wie sie denn spricht!
Das ist nicht normal."

Ich schreib hier Haikus:
Japanische Gedichtsform
Mit fixem Versmaß

Sind fünf, sieben, fünf
Silben in jeder Zeile
Haikus sind simpel

Probier es mal aus
Transportier eine Message
Es macht auch viel Spaß

Wortwahl ist wichtig
Ein guter Thesaurus hilft
Sei kurz und prägnant

Ich sag was ich sag
Fünf Worte mit klugem Sinn:
Ich bin was ich bin

Doch ich schweife ab
Verständnis fast überall?
Wird es akzeptiert?

Erstaunlicherweise
Doch ich bin auch was and'res
Und hier geht's bergab

Eine Sache gibt's
Die erwähn' ich besser nicht
für die steck ich ein

"Deshalb bin ich hier"
So der Titel eines Lieds
verfasst von Thomas D

"Wenn ich erkläre
warum ich mich wie ernähr"
So weit komm ich nicht

Man erwähnt Vegan
Die Intoleranz ist da
Man ist unten durch

"Mangelerscheinung!"
"Das Essen meines Essens!"
Akzeptanz ade

Hab 'ne Theorie:
Vegan sein: 'ne Entscheidung
Transgender sein nicht

Mensch fühlt sich dann schlecht
dass bei sich selbst die Kraft fehlt
und greift damit an

"Ich könnte das nicht"
Ich verurteile dich nicht
Iss doch was du willst

Ich zwing es nicht auf
Aber Rücksicht wär schon fein
Statt nur Hohn und Schmäh

Ich ess was ich ess
Fünf Worte zum Nachdenken:
Ich bin was ich bin
Hope you get the idea. The audience definitely liked it; the jury wasn't so much on board, but that's fine, it's five random people and it's mostly for fun anyway. Later that night, though, some things happened that made me feel less comfortable. I went to the loo, waiting in line with the other ladies; a bit later the waitress came along, telling me "the men's room is over there". I told her that I was aware of that and thanked her, which confused her; she said something along the lines of "so you are both, or what?" but went away after that. Her tone and response didn't give me much comfort, though none of the other ladies in the line looked at me strangely.
But the most disturbing event after that was finding out that North Carolina signed the bathroom bill, making it illegal for trans people to use the bathroom matching their gender and insisting they use the one for the gender they were assigned at birth. So men like James Sheffield are now forced to go to the ladies' restroom, or face getting arrested. Brave new world. :/ So, enjoy the text, don't get too wound up by stupid laws, and hope for time to fix people's discriminatory minds about issues that are already regulated: assaults are assaults and are already banned. Arguing that people might get assaulted, and therefore discriminating against trans people, misses the point entirely, by miles.

